
    Deciding on appropriate use of force: human-machine interaction in weapons systems and emerging norms

    This article considers the role of norms in the debate on autonomous weapons systems (AWS). It argues that the academic and political discussion is largely dominated by considerations of how AWS relate to norms institutionalised in international law. While this debate on AWS has produced insights on legal and ethical norms and sounded out options for a possible regulation or ban, it neglects to investigate how complex human-machine interactions in weapons systems can set standards of appropriate use of force, which are politically and normatively relevant but take place outside of formal, deliberative law-setting. While such procedural norms are already emerging in the practice of contemporary warfare, the increasing technological complexity of AI-driven weapons will add to their political-normative relevance. I argue that public deliberation about, and political oversight and accountability of, the use of force are at risk of being consumed and normalised by functional procedures and perceptions. This can have a profound impact on the future of remote warfare and security policy.

    An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications


    Moving Beyond Semantics on Autonomous Weapons: Meaningful Human Control in Operation

    Ongoing discussions about autonomous weapons typically share concerns about losing control and the potentially destabilizing consequences for global security. To the extent that there is any consensus among the states, academics, NGOs and other commentators involved in diplomatic efforts under the auspices of the UN Convention on Certain Conventional Weapons, it is grounded in the idea that all weapons should be subject to meaningful human control. This intuitively appealing concept immediately gained traction, although at a familiar legal-political cost: nobody knows what the concept actually means in practice. Although global discourses on policy and governance are typically infused with ambiguity, abstract concepts are of little use if they ignore the operational context that confronts the military in their application. This article therefore places the concept in context and examines it in operational practice. Paying attention to this military practice is important, as it demonstrates that meaningful human control is not the only, or the best, approach through which to characterize the human role and govern the challenges raised by autonomous weapons.

    Who is to blame for Autonomous Weapons Systems’ misdoings?

    This chapter analyses who (or what) should be held responsible for behaviours by autonomous weapons systems (AWS) that, were they enacted by a human agent, would qualify as internationally wrongful acts. After illustrating the structural problems that make the ascription of responsibility for AWS’ activities particularly difficult, if not impossible, the alternative routes proposed to close the ensuing responsibility gap will be assessed. The analysis will focus, in the first place, on the international criminal responsibility of the individuals who, in one way or another, are involved in the process of production, deployment and activation of the AWS. The possibility of holding the deploying State accountable for AWS’ wrongdoings will then be gauged. Subsequently, attention will be paid to the responsibility of the corporations manufacturing and/or programming the AWS. It will be observed that these options may solve some responsibility problems more effectively than critics of AWS are ready to admit. At the same time, it will be shown that, unless a no-fault liability regime is adopted, autonomy in weapons systems is bound to magnify the risk that no one may be held to answer for acts which are objectively at odds with international legal prescriptions. It will also be argued that, given the complementary relationship among the various forms of responsibility under international law, proposals aimed at focusing solely on one of these at the expense of the others are incapable of leading to satisfactory results.